5 research outputs found

    Edge Intelligence in Smart Grids: A Survey on Architectures, Offloading Models, Cyber Security Measures, and Challenges

    Get PDF
    The rapid development of new information and communication technologies (ICTs) and the deployment of advanced Internet of Things (IoT)-based devices have led to the study and implementation of edge computing technologies in smart grid (SG) systems. In addition, substantial work has been expended in the literature to incorporate artificial intelligence (AI) techniques into edge computing, resulting in the promising concept of edge intelligence (EI). Consequently, in this article, we provide an overview of the current state of the art in EI-based SG adoption from a range of angles, including architectures, computation offloading, and cybersecurity concerns. The objectives of this article are fourfold. First, we discuss EI and SGs separately: we highlight contemporary concepts closely related to edge computing, its fundamental characteristics, and essential enabling technologies from an EI perspective; we discuss how AI has aided in optimizing the performance of edge computing; and we emphasize the important enabling technologies and applications of SGs from the perspective of EI-based SGs. Second, we explore both general edge computing architectures and EI-based architectures from the perspective of SGs. Third, we address two basic questions about computation offloading: what is computation offloading, and why do we need it? We also divide the primary articles into two categories based on the number of users included in the model, either a single-user or a multiple-user instance. Finally, we review the cybersecurity threats associated with edge computing and the methods used to mitigate them in SGs. This survey concludes that most viable architectures for EI in smart grids consist of three layers: device, edge, and cloud. In addition, computation offloading techniques must be framed as optimization problems and addressed effectively in order to increase system performance. This article is intended to serve as a primer for emerging scholars interested in the study of EI in SGs.
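
    The survey's conclusion that computation offloading should be framed as an optimization problem can be made concrete with the standard single-user binary offloading model from the edge-computing literature. The sketch below is not taken from the survey itself; the cost structure follows the common weighted latency-energy formulation, and every parameter value (CPU frequencies, uplink rate, transmit power, the energy coefficient) is hypothetical.

    ```python
    # Single-user binary offloading as a cost minimization: execute the
    # task locally or ship it to the edge server, whichever is cheaper.
    # This follows the common textbook formulation, not a model defined
    # in the survey; all parameter values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Task:
        data_bits: float   # input size to transmit (bits)
        cycles: float      # CPU cycles required to finish the task

    def local_cost(task, f_local=1e9, kappa=1e-27, w_time=0.5):
        """Weighted sum of local execution latency and CPU energy."""
        latency = task.cycles / f_local
        energy = kappa * task.cycles * f_local ** 2  # common CMOS energy model
        return w_time * latency + (1 - w_time) * energy

    def edge_cost(task, rate=20e6, p_tx=0.5, f_edge=10e9, w_time=0.5):
        """Weighted sum of uplink plus edge latency and radio energy."""
        t_up = task.data_bits / rate                 # uplink transmission time
        latency = t_up + task.cycles / f_edge
        energy = p_tx * t_up                         # energy spent transmitting
        return w_time * latency + (1 - w_time) * energy

    task = Task(data_bits=2e6, cycles=5e8)
    decision = "offload" if edge_cost(task) < local_cost(task) else "local"
    print(f"{decision}: local={local_cost(task):.3f}, edge={edge_cost(task):.3f}")
    ```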

    Development of Hybrid Automatic Segmentation Technique of a Single Leaf from Overlapping Leaves Image

    Get PDF
    The segmentation of a single leaf from an image with overlapping leaves is an important step towards the realization of effective precision agriculture systems. A popular approach to this segmentation task is the hybridization of the Chan-Vese model and the Sobel operator (CV-SO). This hybridized approach is popular because of its simplicity and effectiveness in segmenting a single leaf of interest from a complex background of overlapping leaves. However, the manual threshold and parameter tuning procedure of the CV-SO algorithm often degrades its detection performance. In this paper, we address this problem by introducing a dynamic iterative model that determines the optimal parameters for the CV-SO algorithm, which we dub the Dynamic CV-SO (DCV-SO) algorithm. This is a new hybrid automatic segmentation technique that attempts to improve the detection performance of the original hybrid CV-SO algorithm by reducing its mean error rate. Simulation results indicate that the proposed method yields a 1.23% reduction in the mean error rate relative to the original CV-SO method.
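
    The CV-SO idea can be sketched with off-the-shelf building blocks. The snippet below uses scikit-image (>= 0.19) to combine a Chan-Vese region mask with a Sobel edge mask; it illustrates the hybrid principle only and is not the authors' DCV-SO implementation. The Chan-Vese settings and edge threshold are exactly the kind of hand-tuned values the paper's dynamic model is designed to replace.

    ```python
    # Illustrative Chan-Vese + Sobel (CV-SO) hybrid using scikit-image.
    # Not the authors' DCV-SO code: the parameters below are manual,
    # hand-tuned stand-ins for what DCV-SO would set dynamically.
    from skimage import color, data
    from skimage.filters import sobel
    from skimage.segmentation import chan_vese

    # Grayscale stand-in for a leaf photograph, downsampled for speed.
    image = color.rgb2gray(data.astronaut())[::4, ::4]

    # 1) Region step: Chan-Vese active contours give a binary region mask.
    region_mask = chan_vese(image, mu=0.25, max_num_iter=200)

    # 2) Edge step: Sobel gradient magnitude highlights boundaries.
    edge_mask = sobel(image) > 0.1   # manual threshold (hypothetical value)

    # 3) Hybrid step: carve strong edges out of the region mask so that
    #    touching regions separate along their shared boundaries.
    single_leaf = region_mask & ~edge_mask
    print(f"segmented pixels: {single_leaf.sum()} of {single_leaf.size}")
    ```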

    Performance Analysis of Machine Learning Algorithms for Energy Demand–Supply Prediction in Smart Grids

    No full text
    The use of machine learning (ML) algorithms for power demand and supply prediction is becoming increasingly popular in smart grid systems. Because many simple ML algorithms/models exist in the literature, the question arises whether there is any significant advantage among them, particularly for power demand/supply prediction use cases. Toward answering this question, we examined six well-known ML algorithms for power prediction in smart grid systems: the artificial neural network, Gaussian regression (GR), k-nearest neighbor, linear regression, random forest, and support vector machine (SVM). First, fairness was ensured by undertaking a thorough hyperparameter tuning exercise for the models under consideration. Second, power demand and supply statistics from the Eskom database were selected for day-ahead forecasting purposes. These datasets were based on hourly system demand as well as renewable generation sources. With their hyperparameters properly tuned, the results obtained within the boundaries of the datasets utilized showed little or no significant difference in the quantitative and qualitative performance of the different ML algorithms. Compared to photovoltaic (PV) power generation, we observed that these algorithms performed poorly in predicting wind power output, which could be related to the unpredictable wind-generated power within the time range of the datasets employed. Furthermore, while the SVM algorithm achieved the quickest empirical processing time by a slight margin, statistical tests revealed no significant difference in the timing performance of the various algorithms, except for the GR algorithm. Our preliminary findings therefore suggest that using a variety of existing ML algorithms for power demand/supply prediction may not always yield statistically significant differences in prediction results, particularly for sources with regular patterns, such as solar PV or daily consumption rates, provided that the hyperparameters of such algorithms are properly fine-tuned.
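
    A minimal version of this comparison protocol (tune each candidate model, then score it on held-out data) can be sketched with scikit-learn. The synthetic daily-cycle signal below merely stands in for the Eskom demand/supply datasets, and the three models and small grids are illustrative rather than the paper's exact six-model configuration.

    ```python
    # Sketch of a tuned model comparison for day-ahead demand prediction.
    # Synthetic data stands in for the Eskom datasets; the models and
    # grids below are illustrative, not the paper's configurations.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    hours = np.arange(24 * 365)
    # Noisy daily-cycle "demand" signal; features are the value 24 h ago
    # plus the hour-of-day encoded as a cosine.
    demand = 30 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
    X = np.column_stack([demand[:-24], np.cos(2 * np.pi * (hours[24:] % 24) / 24)])
    y = demand[24:]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)

    candidates = {
        "RF": (RandomForestRegressor(random_state=0), {"n_estimators": [50, 200]}),
        "kNN": (KNeighborsRegressor(), {"n_neighbors": [3, 9, 15]}),
        "SVM": (SVR(), {"C": [1, 10], "gamma": ["scale", 0.1]}),
    }
    for name, (model, grid) in candidates.items():
        search = GridSearchCV(model, grid, cv=3).fit(X_tr, y_tr)
        mae = mean_absolute_error(y_te, search.predict(X_te))
        print(f"{name}: MAE={mae:.3f}, best={search.best_params_}")
    ```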

    AutoElbow: An Automatic Elbow Detection Method for Estimating the Number of Clusters in a Dataset

    No full text
    The elbow technique is a well-known method for estimating the number of clusters required as a starting parameter in the K-means algorithm and certain other unsupervised machine learning algorithms. However, due to the graphical nature of the method's output, human assessment is necessary to determine the location of the elbow and, consequently, the number of data clusters. This article presents a simple method for estimating the elbow point, thus enabling the K-means algorithm to be readily automated. First, the elbow-based graph is normalized using the graph's minimum and maximum values along the ordinate and abscissa coordinates. Then, the distances from each point on the graph to the minimum (i.e., the origin) and maximum reference points, and to the “heel” of the graph, are calculated. The estimated elbow location is thus the point that maximizes the ratio of these distances, which corresponds to an approximate number of clusters in the dataset. We demonstrate that the strategy is effective, stable, and adaptable across different types of datasets characterized by small and large clusters, different cluster shapes, high dimensionality, and unbalanced distributions. We provide the clustering community with a description of the method and present comparative results against other well-known methods in the prior state of the art.
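
    Reading the abstract literally, the method can be sketched in a few lines. In the sketch below, the "minimum", "maximum", and "heel" reference points are taken to be (0, 0), (1, 1), and (1, 0) on the normalized curve, and the maximized ratio is taken to be the distance to the maximum point over the sum of the other two distances; the paper's exact formula may differ from this reading.

    ```python
    # Sketch of the AutoElbow idea as described in the abstract. The
    # reference points (0,0), (1,1), (1,0) and the ratio b / (a + c)
    # are one reading of the description; the paper's exact formula
    # may differ.
    import numpy as np

    def auto_elbow(wcss):
        """Index of the estimated elbow on a decreasing WCSS curve."""
        y = np.asarray(wcss, dtype=float)
        x = np.arange(y.size, dtype=float)
        # Min-max normalize both axes into the unit square.
        x = (x - x.min()) / (x.max() - x.min())
        y = (y - y.min()) / (y.max() - y.min())
        a = np.hypot(x, y)            # distance to the origin (0, 0)
        b = np.hypot(x - 1, y - 1)    # distance to the maximum point (1, 1)
        c = np.hypot(x - 1, y)        # distance to the "heel" (1, 0)
        return int(np.argmax(b / (a + c)))

    # Synthetic WCSS curve for k = 1..10 with a knee at k = 4.
    wcss = [1000, 520, 280, 160, 140, 125, 115, 108, 103, 100]
    print(auto_elbow(wcss) + 1)  # index 3 -> k = 4 clusters
    ```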

    Simple deterministic selection-based genetic algorithm for hyperparameter tuning of machine learning models.

    No full text
    Hyperparameter tuning is a critical function necessary for the effective deployment of most machine learning (ML) algorithms. It is used to find the optimal hyperparameter settings of an ML algorithm in order to improve its overall output performance. To this effect, several optimization strategies have been studied for fine-tuning the hyperparameters of many ML algorithms, especially in the absence of model-specific information. However, because most ML training procedures need a significant amount of computational time and memory, it is frequently necessary to build an optimization technique that converges within a small number of fitness evaluations. As a result, a simple deterministic selection genetic algorithm (SDSGA) is proposed in this article. The SDSGA was realized by ensuring that both chromosomes and their accompanying fitness values in the original genetic algorithm are selected in an elitist-like way. We assessed the SDSGA over a variety of mathematical test functions. It was then used to optimize the hyperparameters of two well-known machine learning models, namely the convolutional neural network (CNN) and the random forest (RF) algorithm, with application to the MNIST and UCI classification datasets. The SDSGA's efficiency was compared to that of Bayesian optimization (BO) and three other popular metaheuristic optimization algorithms (MOAs), namely the genetic algorithm (GA), particle swarm optimization (PSO), and biogeography-based optimization (BBO). The results obtained reveal that the SDSGA performed better than the other MOAs in solving 11 of the 17 known benchmark functions considered in our study. While optimizing the hyperparameters of the two ML models, it performed marginally better in terms of accuracy than the other methods while taking less time to compute.
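
    The selection idea is easy to demonstrate. The sketch below is a generic genetic algorithm whose parents are chosen by deterministic ranking (the best half of the population, carried over with their fitness values) instead of roulette-wheel sampling. It only illustrates the elitist-like selection principle and is not the authors' SDSGA; the sphere objective and all settings are hypothetical.

    ```python
    # Generic GA with deterministic, elitist-like selection, illustrating
    # the principle behind SDSGA; not the authors' implementation. The
    # sphere objective and all settings below are hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)

    def sphere(x):
        return float(np.sum(x ** 2))   # global minimum 0 at the origin

    def deterministic_ga(fitness, dim=5, pop_size=20, gens=100, sigma=0.1):
        pop = rng.uniform(-5, 5, (pop_size, dim))
        for _ in range(gens):
            scores = np.array([fitness(ind) for ind in pop])
            order = np.argsort(scores)             # deterministic ranking
            elite = pop[order[: pop_size // 2]]    # best half, no sampling
            # Crossover: uniform mixing of randomly paired elite parents.
            mates = elite[rng.permutation(len(elite))]
            mask = rng.random(elite.shape) < 0.5
            children = np.where(mask, elite, mates)
            # Mutation: small Gaussian perturbation of the children.
            children += rng.normal(0, sigma, children.shape)
            pop = np.vstack([elite, children])     # elites survive unchanged
        best = min(pop, key=fitness)
        return best, fitness(best)

    best, value = deterministic_ga(sphere)
    print(f"best fitness found: {value:.6f}")
    ```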